Magistral Small 2506 Reasoning 24B NEO MAX Imatrix GGUF
Apache-2.0
A reasoning model based on Mistral, with enhanced reasoning and output-generation quality via NEO Imatrix quantization and MAX output tensors
Multilingual large language model